55 research outputs found

    Disentangling Factors of Variation by Mixing Them

    We propose an approach to learn image representations that consist of disentangled factors of variation without exploiting any manual labeling or data domain knowledge. A factor of variation corresponds to an image attribute that can be discerned consistently across a set of images, such as the pose or color of objects. Our disentangled representation consists of a concatenation of feature chunks, each chunk representing one factor of variation. It supports applications such as transferring attributes from one image to another, by simply mixing and unmixing feature chunks, and classification or retrieval based on one or several attributes, by considering a user-specified subset of feature chunks. We learn our representation without any labeling or knowledge of the data domain, using an autoencoder architecture with two novel training objectives: first, we propose an invariance objective to encourage that the encoding of each attribute and the decoding of each chunk are invariant to changes in the other attributes and chunks, respectively; second, we include a classification objective, which ensures that each chunk corresponds to a consistently discernible attribute in the represented image, hence avoiding degenerate feature mappings where some chunks are completely ignored. We demonstrate the effectiveness of our approach on the MNIST, Sprites, and CelebA datasets. Comment: CVPR 201
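The attribute transfer described above amounts to swapping one feature chunk between two encodings. A minimal NumPy sketch of that mixing step (the chunk layout and sizes are illustrative, not the authors' implementation):

```python
import numpy as np

def mix_chunks(feat_a, feat_b, chunk_size, swap_idx):
    """Transfer one factor of variation by swapping a single feature chunk.

    feat_a, feat_b : 1-D feature vectors (concatenations of equal-size chunks).
    chunk_size     : length of each chunk.
    swap_idx       : index of the chunk (attribute) to take from feat_b.
    """
    mixed = feat_a.copy()
    start = swap_idx * chunk_size
    mixed[start:start + chunk_size] = feat_b[start:start + chunk_size]
    return mixed

# Two toy "encodings", each made of 3 chunks of size 2.
a = np.array([1, 1, 2, 2, 3, 3], dtype=float)
b = np.array([9, 9, 8, 8, 7, 7], dtype=float)

# Take attribute 1 (say, "color") from b, keep the rest from a.
print(mix_chunks(a, b, chunk_size=2, swap_idx=1))  # [1. 1. 8. 8. 3. 3.]
```

Decoding the mixed vector would then render an image with one attribute replaced; the decoder itself is omitted here.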

    Challenges in Disentangling Independent Factors of Variation

    We study the problem of building models that disentangle independent factors of variation. Such models could be used to encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As data we use a weakly labeled training set. Our weak labels indicate which single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. To make use of weak labels we introduce an autoencoder model and train it through constraints on image pairs and triplets. We formally prove that without additional knowledge there is no guarantee that two images with the same factor of variation will be mapped to the same feature. We call this issue the reference ambiguity. Moreover, we show the role of the feature dimensionality and adversarial training. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but also show cases where the reference ambiguity occurs. Comment: Submitted to ICLR 201
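A weak label of the kind described above ("only factor k changed between this pair") can be turned into a simple training constraint: all feature chunks except the labeled one should stay the same. A toy NumPy sketch of such a pair loss (the chunked layout and names are assumptions for illustration):

```python
import numpy as np

def pair_invariance_loss(feat1, feat2, num_chunks, changed_idx):
    """Penalize differences in every chunk except the one the weak label
    says has changed between the two samples."""
    c1 = np.split(feat1, num_chunks)
    c2 = np.split(feat2, num_chunks)
    return sum(float(np.sum((a - b) ** 2))
               for i, (a, b) in enumerate(zip(c1, c2)) if i != changed_idx)

# Pair where only chunk 1 differs: loss is zero under the correct label.
f1 = np.array([0.0, 0.0, 1.0, 1.0])
f2 = np.array([0.0, 0.0, 5.0, 5.0])
print(pair_invariance_loss(f1, f2, num_chunks=2, changed_idx=1))  # 0.0
```

Note that, as the abstract's reference-ambiguity result implies, a constraint like this alone cannot guarantee that the same factor value maps to the same feature across images.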

    FaceShop: Deep Sketch-based Face Image Editing

    We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need for hand-drawn sketching at all. The proposed interface runs in real time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high-quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high-quality synthesis results without additional post-processing. Comment: 13 pages, 20 figure
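Training on completion and translation simultaneously can be read as one objective split over the edited and unedited regions. A toy NumPy sketch of such a combined loss (the decomposition and weights are illustrative assumptions, not the paper's actual objective):

```python
import numpy as np

def combined_loss(pred, target, mask, w_completion=1.0, w_translation=1.0):
    """Toy combined objective: a completion term on the masked (edited)
    region and a translation term on the rest of the image.

    pred, target : arrays of the same shape (predicted / reference image).
    mask         : 1.0 inside the edited region, 0.0 elsewhere.
    """
    completion = np.mean(((pred - target) * mask) ** 2)
    translation = np.mean(((pred - target) * (1.0 - mask)) ** 2)
    return w_completion * completion + w_translation * translation

pred = np.ones((2, 2))
target = np.zeros((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])  # one edited pixel
print(combined_loss(pred, target, mask))   # 1.0
```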

    Biological Activities of Chinese Propolis and Brazilian Propolis on Streptozotocin-Induced Type 1 Diabetes Mellitus in Rats

    Propolis is a bee-collected natural product and has been proven to have various bioactivities. This study tested the effects of Chinese propolis and Brazilian propolis on streptozotocin-induced type 1 diabetes mellitus in Sprague-Dawley rats. The results showed that Chinese propolis and Brazilian propolis significantly inhibited body weight loss and blood glucose increase in diabetic rats. In addition, Chinese propolis-treated rats showed an 8.4% reduction of glycated hemoglobin levels compared with untreated diabetic rats. Measurement of blood lipid metabolism showed dyslipidemia in diabetic rats, and Chinese propolis helped to reduce the total cholesterol level by 16.6%. Moreover, oxidative stress in blood, liver, and kidney was improved to various degrees by both Chinese propolis and Brazilian propolis. An apparent reduction in the levels of alanine transaminase, aspartate transaminase, blood urea nitrogen, and the urine microalbuminuria excretion rate demonstrated the beneficial effects of propolis on hepatorenal function. All these results suggest that Chinese propolis and Brazilian propolis can alleviate symptoms of diabetes mellitus in rats and that these effects may partially be due to their antioxidant ability.

    Development and validation of a dynamic nomogram based on conventional ultrasound and contrast-enhanced ultrasound for stratifying the risk of central lymph node metastasis in papillary thyroid carcinoma preoperatively

    Purpose: The aim of this study was to develop and validate a dynamic nomogram combining conventional ultrasound (US) and contrast-enhanced US (CEUS) to preoperatively evaluate the probability of central lymph node metastases (CLNMs) in patients with papillary thyroid carcinoma (PTC).
    Methods: A total of 216 patients with pathologically confirmed PTC were included in this retrospective and prospective study and divided into training and validation cohorts, respectively. Each cohort was divided into CLNM (+) and CLNM (−) groups. The least absolute shrinkage and selection operator (LASSO) regression method was applied to select the most useful predictive features for CLNM in the training cohort, and these features were incorporated into a multivariate logistic regression analysis to develop the nomogram. The nomogram's discrimination, calibration, and clinical usefulness were assessed in the training and validation cohorts.
    Results: In the training and validation cohorts, the dynamic nomogram (https://clnmpredictionmodel.shinyapps.io/PTCCLNM/) had an area under the receiver operating characteristic curve (AUC) of 0.844 (95% CI, 0.755–0.905) and 0.827 (95% CI, 0.747–0.906), respectively. The Hosmer–Lemeshow test and calibration curves showed that the nomogram had good calibration (p = 0.385 and p = 0.285). Decision curve analysis (DCA) showed that the nomogram had greater predictive value for CLNM than US or CEUS features alone across a wide range of risk thresholds. A Nomo-score cutoff of 0.428 performed well in stratifying the high-risk and low-risk groups.
    Conclusion: A dynamic nomogram combining US and CEUS features can be applied to preoperative risk stratification of CLNM in patients with PTC in clinical practice.
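The LASSO-then-logistic-regression pipeline described in the Methods can be sketched with scikit-learn, here folding the L1 (LASSO-style) selection into an L1-penalized logistic model on synthetic data. The features, penalty strength, and data are illustrative stand-ins, not the study's variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for US/CEUS features; only the first two are informative.
X = rng.normal(size=(n, 6))
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The L1 penalty shrinks uninformative coefficients toward zero,
# performing feature selection inside the logistic model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0])        # indices of retained features
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("selected features:", selected, "AUC: %.3f" % auc)
```

In the study itself, LASSO selection and the multivariate logistic fit were separate steps, and discrimination was evaluated on a held-out validation cohort rather than in-sample as here.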